Self-Decoupling and Ensemble Distillation for Efficient Segmentation

Authors

Abstract

Knowledge distillation (KD) is a promising teacher-student learning paradigm that transfers information from a cumbersome teacher network to a student network. To avoid the training cost of a large teacher network, recent studies propose distilling knowledge from the network itself, called Self-KD. However, due to the limited performance and capacity of the student, the soft-labels or features distilled by the student itself barely provide reliable guidance. Moreover, most Self-KD algorithms are specific to classification tasks based on soft-labels and are not suitable for semantic segmentation. To alleviate these contradictions, we revisit the label and feature distillation problem in segmentation and propose Self-Decoupling and Ensemble Distillation for Efficient Segmentation (SDES). Specifically, we design a decoupled prediction ensemble distillation (DPED) algorithm that generates reliable soft-labels with multiple expert decoders, and a decoupled feature ensemble distillation (DFED) mechanism to utilize the more important channel-wise feature maps for encoder learning. Extensive experiments on three public segmentation datasets demonstrate the superiority of our approach, and an ablation study shows the efficacy of each component of the framework.
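Read literally, the abstract suggests two training-time ingredients: an ensemble of auxiliary expert decoders whose averaged soft predictions supervise the main decoder, and a channel-wise feature distillation term on the shared encoder. The PyTorch-style sketch below illustrates that reading only; the decoder design, the use of an exponential-moving-average (EMA) copy of the encoder as the feature target, the temperature, and the equal loss weights are assumptions for illustration, not the SDES specifics.

```python
# Illustrative sketch of prediction-ensemble and channel-wise feature
# self-distillation for segmentation. NOT the SDES implementation; the EMA
# feature teacher, temperature and loss weights are assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfEnsembleSegNet(nn.Module):
    """Shared encoder + main decoder + auxiliary expert decoders (training only)."""
    def __init__(self, encoder, main_decoder, expert_decoders, tau=2.0):
        super().__init__()
        self.encoder = encoder
        self.main_decoder = main_decoder
        self.experts = nn.ModuleList(expert_decoders)
        # EMA copy of the encoder serves as the feature "teacher" here
        # (an assumption, not necessarily how SDES builds its feature target).
        self.ema_encoder = copy.deepcopy(encoder)
        for p in self.ema_encoder.parameters():
            p.requires_grad_(False)
        self.tau = tau

    @torch.no_grad()
    def update_ema(self, momentum=0.99):
        for p_ema, p in zip(self.ema_encoder.parameters(), self.encoder.parameters()):
            p_ema.mul_(momentum).add_(p, alpha=1.0 - momentum)

    def forward(self, x, labels=None):
        feat = self.encoder(x)                      # B x C x H x W
        main_logits = self.main_decoder(feat)
        if labels is None:                          # inference: only the main head
            return main_logits

        # Prediction-level ensemble distillation: the main decoder mimics the
        # averaged tempered predictions of the expert decoders.
        expert_logits = [d(feat) for d in self.experts]
        with torch.no_grad():
            soft_label = torch.stack(
                [F.softmax(l / self.tau, dim=1) for l in expert_logits]).mean(0)
        kd = F.kl_div(F.log_softmax(main_logits / self.tau, dim=1),
                      soft_label, reduction="batchmean") * self.tau ** 2

        # Channel-wise feature distillation: align per-channel spatial
        # distributions with the EMA encoder, weighted by channel importance
        # (here: mean absolute activation of the teacher channel).
        with torch.no_grad():
            t_feat = self.ema_encoder(x)
            w = t_feat.abs().mean(dim=(0, 2, 3))
            w = w / w.sum()
        s = F.log_softmax(feat.flatten(2), dim=-1)
        t = F.softmax(t_feat.flatten(2), dim=-1)
        fd = (F.kl_div(s, t, reduction="none").sum(-1).mean(0) * w).sum()

        ce = F.cross_entropy(main_logits, labels)
        ce_aux = sum(F.cross_entropy(l, labels) for l in expert_logits)
        return main_logits, ce + ce_aux + kd + fd
```

In this sketch the expert decoders and the EMA copy exist only during training; inference uses the encoder and main decoder alone, which is where the efficiency comes from.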


Related articles

Efficient Knowledge Distillation from an Ensemble of Teachers

This paper describes the effectiveness of knowledge distillation using teacher-student training for building accurate and compact neural networks. We show that with knowledge distillation, information from multiple acoustic models like very deep VGG networks and Long Short-Term Memory (LSTM) models can be used to train standard convolutional neural network (CNN) acoustic models for a variety of...
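As a companion to this summary, here is a minimal sketch of the usual loss for distilling from an ensemble of teachers: the teachers' tempered posteriors are averaged into a single soft target. The temperature, the mixing weight alpha, and the plain averaging rule are illustrative assumptions.

```python
# Minimal ensemble-teacher distillation loss (illustrative assumptions only).
import torch
import torch.nn.functional as F

def ensemble_distill_loss(student_logits, teacher_logits_list, labels,
                          tau=2.0, alpha=0.5):
    """Cross-entropy on hard labels + KL toward the averaged teacher posterior."""
    with torch.no_grad():
        avg_teacher = torch.stack(
            [F.softmax(t / tau, dim=-1) for t in teacher_logits_list]).mean(0)
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                  avg_teacher, reduction="batchmean") * tau ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```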


Ensemble Distillation for Neural Machine Translation

Knowledge distillation describes a method for training a student network to perform better by learning from a stronger teacher network. In this work, we run experiments with different kinds of teacher networks to enhance the translation performance of a student Neural Machine Translation (NMT) network. We demonstrate techniques based on an ensemble and a best BLEU teacher network. We also show ...
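The summary mentions two teacher choices: a full ensemble and the single teacher with the best BLEU. The averaged-soft-target loss sketched above covers the ensemble case; the snippet below sketches only the selection of the best-BLEU teacher on a development set, assuming a hypothetical translate_fn(model, sources) helper that returns detokenized hypotheses and using sacrebleu for scoring.

```python
# Pick the teacher with the highest corpus BLEU on a dev set.
# translate_fn is a hypothetical helper, not part of any specific toolkit.
import sacrebleu

def pick_best_bleu_teacher(teachers, translate_fn, dev_sources, dev_references):
    """Return the candidate teacher scoring highest corpus BLEU on the dev set."""
    best_model, best_bleu = None, float("-inf")
    for model in teachers:
        hyps = translate_fn(model, dev_sources)
        bleu = sacrebleu.corpus_bleu(hyps, [dev_references]).score
        if bleu > best_bleu:
            best_model, best_bleu = model, bleu
    return best_model, best_bleu
```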


The Search for the Self in Beckett's Theatre: Waiting for Godot and Endgame

This thesis is based upon the works of Samuel Beckett, one of the greatest writers of contemporary literature. Here, I have tried to focus on one of the main themes in Beckett's works: the search for the real "me" or the real self, which is not only a problem to be solved for Beckett's man but also for each of us. I have tried to show Beckett's techniques in approaching this unattainable goal, base...


Ensemble Semi-Supervised Framework for Brain MRIs Tissue Segmentation

Brain MR image tissue segmentation is one of the most important parts of clinical diagnostic tools. Pixel classification methods have frequently been used in image segmentation with both supervised and unsupervised approaches up to now. Supervised segmentation methods lead to high accuracy, but they need a large amount of labeled data, which is hard, expensive and slow to obtain. Moreove...


Efficient Pruning Method for Ensemble Self-Generating Neural Networks

Recently, multiple classifier systems (MCS) have been used for practical applications to improve classification accuracy. Self-generating neural networks (SGNN) are one of the suitable base-classifiers for MCS because of their simple setting and fast learning. However, the computation cost of the MCS increases in proportion to the number of SGNN. In this paper, we propose an efficient pruning m...
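The proposed pruning method itself is cut off in this summary, so the sketch below shows only a generic greedy, accuracy-driven ensemble pruning baseline (not the paper's algorithm): members are added to the kept subset while the validation accuracy of the majority vote keeps improving.

```python
# Generic greedy ensemble pruning by validation accuracy of the majority vote.
# Illustrative baseline only; not the pruning method proposed in the paper.
import numpy as np

def greedy_prune_ensemble(member_preds, y_val):
    """member_preds: list of 1-D int arrays of class predictions on a validation set."""
    def vote_acc(idxs):
        votes = np.stack([member_preds[i] for i in idxs])
        # majority vote per sample (ties broken toward the lower class index)
        maj = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
        return (maj == y_val).mean()

    selected, best = [], 0.0
    remaining = list(range(len(member_preds)))
    while remaining:
        gains = [(vote_acc(selected + [i]), i) for i in remaining]
        acc, i = max(gains)
        if acc <= best and selected:
            break                      # no remaining member improves the vote
        selected.append(i)
        remaining.remove(i)
        best = acc
    return selected, best
```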



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i2.25266